10 research outputs found

    AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration

    Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations in which parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: any sequence of organisms (each one a child of the previous) must either contain occasional multi-parent organisms or terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
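    A minimal sketch of the shape of this argument, as read from the abstract (the ordinal-valued formulation of the measure is an assumption made here for illustration; the paper's actual measure and proof should be consulted):

```latex
% Well-foundedness sketch (assumption: the measure m is ordinal-valued and
% strictly decreases whenever a parent AGI creates a child AGI on its own).
%
% If A_1, A_2, A_3, ... were an infinite chain in which each A_{i+1} is
% produced single-handedly by A_i, then
%   m(A_1) > m(A_2) > m(A_3) > ...
% would be an infinite strictly descending sequence of ordinals, which cannot
% exist because the ordinals are well-founded. Hence any such chain must
% either terminate or contain a step with more than one parent -- the
% AGI analogue of the Knight-Darwin Law.
\[
  \forall i:\ m(A_{i+1}) < m(A_i)
  \;\Longrightarrow\;
  \text{the chain } A_1, A_2, A_3, \dots \text{ cannot be infinite.}
\]
```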

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.
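    The kind of control loop this implies can be sketched as follows; this is purely illustrative and not the project's code, and all names (BehaviorPlan, ListenerResponseClassifier, attentive_speaking) are hypothetical:

```python
# Illustrative sketch only, not the eNTERFACE'10 project code. It shows the
# kind of control loop the report describes: producing multimodal behavior
# increment by increment while simultaneously monitoring the partner, and
# interrupting or adapting on the fly. All names here are hypothetical.
import time


class BehaviorPlan:
    """A scheduled behavior (e.g. speech plus gesture) split into interruptible chunks."""

    def __init__(self, chunks):
        self.chunks = list(chunks)


class ListenerResponseClassifier:
    """Stand-in for the automatic classifier of listener responses (nods, 'uh-huh', ...)."""

    def detect(self):
        # A real system would fuse audio/visual features in real time here.
        return None  # or e.g. "backchannel", "interruption", "confusion"


def attentive_speaking(plan, classifier, realize):
    """Produce the plan chunk by chunk, reacting to listener responses mid-utterance."""
    for chunk in plan.chunks:
        realize(chunk)                      # start producing this increment
        response = classifier.detect()      # keep listening while speaking
        if response == "interruption":
            break                           # gracefully yield the turn
        if response == "confusion":
            realize("(rephrase the previous point)")  # adapt on the fly
        time.sleep(0.1)                     # placeholder for realizer timing


if __name__ == "__main__":
    plan = BehaviorPlan(["Hello,", "let me walk you through the schedule,", "first we..."])
    attentive_speaking(plan, ListenerResponseClassifier(), realize=print)
```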

    Cumulative learning

    An important feature of human learning is the ability to continuously accept new information and unify it with existing knowledge, a process that proceeds largely automatically and without catastrophic side-effects. A generally intelligent machine (AGI) should be able to learn a wide range of tasks in a variety of environments. Knowledge acquisition in partially known and dynamic task-environments cannot happen all at once, and AGI-aspiring systems must thus be capable of cumulative learning: efficiently making use of existing knowledge while learning new things, increasing the scope of ability and knowledge incrementally, without catastrophic forgetting or damaging existing skills. Many aspects of such learning have been addressed in artificial intelligence (AI) research, but relatively few examples of cumulative learning have been demonstrated to date, and no generally accepted explicit definition exists for this category of learning. Here we provide a general definition of cumulative learning and describe how it relates to other concepts frequently used in the AI literature.
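    As a minimal sketch of the requirement being defined (not the paper's mechanism): a cumulative learner takes knowledge in increments, unifies new information with what is already known, and keeps earlier abilities usable afterwards. The key-value "knowledge base" and reconcile policy below are assumptions made purely for illustration:

```python
# Minimal sketch of the cumulative-learning *requirement* described above,
# not the paper's proposal. Knowledge arrives incrementally, is unified with
# existing knowledge, and earlier knowledge stays usable. The key-value
# "knowledge base" and the reconcile policy are illustrative assumptions.
class CumulativeLearner:
    def __init__(self):
        self.knowledge = {}  # existing knowledge, extended over time

    def learn(self, new_facts):
        """Integrate new facts without discarding or rebuilding prior knowledge."""
        for key, value in new_facts.items():
            if key in self.knowledge and self.knowledge[key] != value:
                # Unification step: reconcile rather than silently overwrite.
                value = self.reconcile(key, self.knowledge[key], value)
            self.knowledge[key] = value

    def reconcile(self, key, old, new):
        # Placeholder policy: keep both interpretations for later disambiguation.
        return (old, new)

    def knows(self, key):
        """Earlier knowledge and skills must remain available after later learning."""
        return key in self.knowledge


learner = CumulativeLearner()
learner.learn({"light switch": "toggles the lamp"})        # learned early on
learner.learn({"dimmer": "scales the lamp's brightness"})  # learned later
assert learner.knows("light switch")                       # no catastrophic loss
```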

    Elckerlyc. A BML Realizer for continuous, multimodal interaction with a Virtual Human

    van Welbergen H, Reidsma D, Ruttkay ZM, Zwiers J. Elckerlyc. A BML Realizer for continuous, multimodal interaction with a Virtual Human. Journal on Multimodal User Interfaces. 2010;3(4):271-284.
    “Elckerlyc” is a BML Realizer for generating multimodal verbal and nonverbal behavior for Virtual Humans (VHs). The main characteristics of Elckerlyc are that (1) it is designed specifically for continuous interaction with tight temporal coordination between the behavior of a VH and its interaction partner; (2) it provides a mix between the precise temporal and spatial control offered by procedural animation and the physical realism of physical simulation; and (3) it is designed to be highly modular and extensible, implementing the architecture proposed in SAIBA.
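    For flavour, the sketch below assembles the kind of BML block such a realizer consumes; the exact BML 1.0 element and attribute set (and Elckerlyc's extensions) should be checked against their specifications, and send_to_realizer is a hypothetical stand-in for the realizer's actual input channel:

```python
# Illustrative only: a small BML-style block of the kind a SAIBA realizer such
# as Elckerlyc consumes, assembled as a plain string. Element and attribute
# names follow my reading of BML 1.0 and may need adjusting against the spec;
# send_to_realizer is a hypothetical stand-in for the realizer's input channel.
bml_block = """
<bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <!-- Speech with a named sync point that other behaviors can align to. -->
  <speech id="s1">
    <text>Let me <sync id="tm1"/> show you the schedule.</text>
  </speech>
  <!-- Gesture stroke temporally coordinated with the sync point in the speech. -->
  <gesture id="g1" lexeme="BEAT" stroke="s1:tm1"/>
  <!-- Gaze at the interaction partner for the duration of the utterance. -->
  <gaze id="gz1" target="partner" start="s1:start" end="s1:end"/>
</bml>
"""


def send_to_realizer(bml: str) -> None:
    """Hypothetical transport; a real setup hands the block to the realizer's BML input."""
    print(bml)


send_to_realizer(bml_block)
```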